Description
This project develops an end-to-end pipeline for real-time, two-way translation between different sign languages, using a spoken language as an intermediary. Unlike most prior systems, which focus on one-way sign-to-text or sign-to-speech translation within a single language, this work addresses the multilingual communication gap faced by Deaf communities in global contexts.
The proposed architecture is fully on-device, operating on edge hardware such as smartphones. It integrates the following components (a minimal pipeline sketch follows the list):
• Pose and RGB data capture for sign recognition
• Deep learning models for on-device recognition and translation inference
• LLM-based spoken language mediation (e.g., English)
• Sign re-generation in the target language (e.g., ISL)
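To make the data flow concrete, the following is a minimal Python sketch of how the four components above could compose into a single translation pass. Every class, method, model behavior, and gloss here is a hypothetical placeholder, not the project's actual implementation; the sketch shows only the staging of capture, recognition, spoken-language mediation, and sign re-generation.

```python
"""Hypothetical sketch of the on-device sign-to-sign pipeline.

All names and return values are illustrative stand-ins; real components
would be quantized neural models and an avatar renderer.
"""

from dataclasses import dataclass


@dataclass
class PoseFrame:
    """One captured frame: pose keypoints plus the raw RGB image."""
    keypoints: list[tuple[float, float]]  # (x, y) landmark coordinates
    rgb: bytes                            # encoded camera frame


class SignRecognizer:
    """Placeholder for the pose+RGB recognition model; returns a demo gloss."""
    def recognize(self, frames: list[PoseFrame]) -> str:
        return "HELLO" if frames else ""


class SpokenLanguageMediator:
    """Placeholder for the on-device LLM mediating via English."""
    def gloss_to_text(self, gloss: str) -> str:
        return gloss.capitalize() + "."      # e.g. "HELLO" -> "Hello."

    def text_to_target_gloss(self, text: str) -> str:
        return text.rstrip(".").upper()      # e.g. "Hello." -> "HELLO"


class SignGenerator:
    """Placeholder for the renderer producing target-language signing."""
    def render(self, gloss: str) -> str:
        return f"<ISL animation for '{gloss}'>"


def translate(frames: list[PoseFrame]) -> str:
    """One end-to-end ASL -> English -> ISL pass over a captured segment."""
    recognizer = SignRecognizer()
    mediator = SpokenLanguageMediator()
    generator = SignGenerator()
    source_gloss = recognizer.recognize(frames)      # sign recognition
    english = mediator.gloss_to_text(source_gloss)   # spoken-language mediation
    target_gloss = mediator.text_to_target_gloss(english)
    return generator.render(target_gloss)            # sign re-generation


if __name__ == "__main__":
    demo = [PoseFrame(keypoints=[(0.5, 0.5)], rgb=b"")]
    print(translate(demo))  # -> <ISL animation for 'HELLO'>
```

Because every stage exchanges plain data (frames, glosses, text), each model can be swapped or quantized independently, which is what makes a fully on-device deployment plausible.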
By demonstrating an ASL–English–ISL translation pipeline, this work establishes the feasibility of real-time, privacy-preserving, and scalable sign-to-sign translation. The system reduces reliance on internet connectivity and cloud services, making it more practical for use in healthcare, education, cultural exchange, and daily communication.
